53 research outputs found

    Combination of optical and SAR remote sensing data for wetland mapping and monitoring

    Wetlands provide many services to the environment and humans. They play a pivotal role in water quality, climate change, and the carbon and hydrological cycles. Wetlands are environmental health indicators because of their contributions to plant and animal habitats. While a large portion of Newfoundland and Labrador (NL) is covered by wetlands, no significant efforts had been made to identify and monitor these valuable environments when I initiated this project. At that time, only two small areas in NL had been classified, using basic Remote Sensing (RS) methods with low accuracies. There was an immediate need to develop new methods for conserving and managing these vital resources using up-to-date maps of wetland distributions. In this thesis, object- and pixel-based classification methods were compared to show the high potential of the former when medium or high spatial resolution imagery was used to classify wetlands. The maps produced using several classification algorithms were also compared to select the optimum classifier for future experiments. Moreover, a novel Multiple Classifier System (MCS), which combined several algorithms, was proposed to increase the classification accuracy of complex and similar land covers, such as wetlands. Landsat-8 images captured in different months were also investigated to select the time at which wetlands had the highest separability using the Random Forest (RF) algorithm. Additionally, various spectral, polarimetric, texture, and ratio features extracted from multi-source optical and Synthetic Aperture Radar (SAR) data were assessed to select the most effective features for discriminating wetland classes. The methods developed during this dissertation were validated in five study areas to show their effectiveness. Finally, in collaboration with a team, a website (http://nlwetlands.ca/) and a software package, named the Advanced Remote Sensing Lab (ARSeL), were developed to automatically preprocess optical/SAR data and classify wetlands using advanced algorithms. In summary, the outputs of this work are promising and can be incorporated into future studies related to wetlands. The province can also benefit from the results in many ways.
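
    As a rough illustration of the Multiple Classifier System idea mentioned above, the sketch below combines a Random Forest, an SVM, and a k-nearest-neighbours classifier through soft voting in scikit-learn. It is a generic ensemble on synthetic placeholder data, not the thesis's actual MCS; the features, class count, and parameters are assumptions for the example only.

```python
# Minimal sketch of a soft-vote multiple classifier system.
# All data here are random placeholders standing in for per-object
# spectral/SAR wetland features and class labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))        # placeholder feature table
y = rng.integers(0, 5, size=500)      # placeholder labels (e.g., 5 wetland classes)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

mcs = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="soft",                    # combine predicted class probabilities
)
mcs.fit(X_tr, y_tr)
print("ensemble accuracy:", accuracy_score(y_te, mcs.predict(X_te)))
```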

    Automatic mapping of burned areas using Landsat 8 time-series images in Google Earth engine: a case study from Iran

    Due to natural conditions and inappropriate management responses, large parts of the plains and forests in Iran have burned in recent years. Given the increasing availability of open-access satellite images and open-source software packages, we developed a fast and cost-effective remote sensing methodology for characterizing burned areas across the entire country of Iran. We mapped the fire-affected areas using a post-classification supervised method and Landsat 8 time-series images. To this end, the Google Earth Engine (GEE) and Google Colab computing services were used to facilitate the downloading and processing of images and to allow effective implementation of the algorithms. In total, 13 spectral indices were calculated from the Landsat 8 images and added to the nine original Landsat 8 bands. The training polygons of the burned and unburned areas were accurately distinguished based on information acquired from the Iranian Space Agency (ISA), Sentinel-2 images, and Fire Information for Resource Management System (FIRMS) products. A combination of Genetic Algorithm (GA) and Neural Network (NN) approaches was then implemented to select 19 optimal features out of the 22 bands. The 19 optimal bands were subsequently applied to two classifiers, NN and Random Forest (RF), over the timespans of 1 January 2019 to 30 December 2020 and 1 January 2021 to 30 September 2021. Overall classification accuracies of 94% and 96% were obtained for these two classifiers, respectively. The omission and commission errors of both classifiers were also less than 10%, indicating the promising capability of the proposed methodology in detecting burned areas. To detect the burned areas caused by the wildfire in 2021, an image differencing method was also used. The resultant models were finally compared to the MODIS fire products over 10 sampled polygons of the burned areas. Overall, the models detected the burned areas with high accuracy in terms of shape and perimeter, and these results can further inform prevention strategies for endangered biodiversity.
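
    The sketch below illustrates, in hedged form, the general Google Earth Engine pattern described above: build a Landsat 8 composite, add spectral indices to the original bands, sample training polygons, and classify with a Random Forest. The region, band subset, index choices, and the training asset path are illustrative assumptions, not the study's actual inputs.

```python
# Hedged GEE sketch: Landsat 8 composite + two example indices + RF classifier.
import ee

ee.Initialize()  # assumes authenticated Earth Engine credentials

region = ee.Geometry.Rectangle([44.0, 25.0, 63.5, 40.0])  # rough bounding box (illustrative)

composite = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
             .filterBounds(region)
             .filterDate("2019-01-01", "2020-12-31")
             .median())

# Two of the many spectral indices the paper computes (NBR and NDVI).
nbr = composite.normalizedDifference(["SR_B5", "SR_B7"]).rename("NBR")
ndvi = composite.normalizedDifference(["SR_B5", "SR_B4"]).rename("NDVI")
stack = composite.addBands(nbr).addBands(ndvi)

bands = ["SR_B2", "SR_B3", "SR_B4", "SR_B5", "SR_B6", "SR_B7", "NBR", "NDVI"]

# Hypothetical FeatureCollection of burned/unburned polygons with a 'class' column.
training_polys = ee.FeatureCollection("users/example/burned_training")

samples = stack.select(bands).sampleRegions(
    collection=training_polys, properties=["class"], scale=30)

rf = ee.Classifier.smileRandomForest(100).train(
    features=samples, classProperty="class", inputProperties=bands)

burned_map = stack.select(bands).classify(rf)
```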

    Three-Dimensional Mapping of Habitats Using Remote-Sensing Data and Machine-Learning Algorithms

    Progress toward habitat protection goals can be effectively assessed using satellite imagery and machine-learning (ML) models at various spatial and temporal scales. In this regard, habitat types and landscape structures can be discriminated using remote-sensing (RS) datasets. However, most existing research in three-dimensional (3D) habitat mapping relies primarily on same- or cross-sensor features, such as those derived from multibeam Light Detection And Ranging (LiDAR), hydrographic LiDAR, and aerial images, often overlooking the potential benefits of multi-sensor data integration. To address this gap, this study introduced a novel approach to creating 3D habitat maps by using high-resolution multispectral images and a LiDAR-derived Digital Surface Model (DSM) coupled with an object-based Random Forest (RF) algorithm. LiDAR-derived products were also used to improve the accuracy of the habitat classification, especially for habitat classes with similar spectral characteristics but different heights. Two study areas in the United Kingdom (UK) were chosen to explore the accuracy of the developed models. The overall accuracies for the two study areas were high (91% and 82%), indicating the high potential of the developed RS method for 3D habitat mapping. Overall, it was observed that a combination of high-resolution multispectral imagery and LiDAR data can help separate different habitat types and provide reliable 3D information.
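
    A simplified, pixel-based stand-in for the multi-sensor idea above is sketched below: multispectral bands are stacked with a LiDAR-derived DSM band and classified with a Random Forest. The real study works on image objects; the arrays, labels, and parameters here are synthetic placeholders for illustration only.

```python
# Stack optical bands with a DSM band and classify per pixel with RF.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
rows, cols, n_bands = 100, 100, 4
multispectral = rng.random((rows, cols, n_bands))  # placeholder optical bands
dsm = rng.random((rows, cols, 1))                  # placeholder LiDAR-derived heights

features = np.concatenate([multispectral, dsm], axis=2).reshape(-1, n_bands + 1)

# Hypothetical labelled training pixels (habitat class IDs 0..3).
train_idx = rng.choice(features.shape[0], size=400, replace=False)
train_labels = rng.integers(0, 4, size=400)

rf = RandomForestClassifier(n_estimators=300, random_state=0)
rf.fit(features[train_idx], train_labels)

habitat_map = rf.predict(features).reshape(rows, cols)  # 2D class map; the DSM band adds the height cue
```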

    Automatic Relative Radiometric Normalization of Bi-Temporal Satellite Images Using a Coarse-to-Fine Pseudo-Invariant Features Selection and Fuzzy Integral Fusion Strategies

    Relative radiometric normalization (RRN) is important for pre-processing and analyzing multitemporal remote sensing (RS) images. Multitemporal RS images usually include different land use/land cover (LULC) types; therefore, assuming an identical linear relationship during RRN modeling may introduce errors into the RRN results. To resolve this issue, we proposed a new automatic RRN technique that efficiently selects clustered pseudo-invariant features (PIFs) through a coarse-to-fine strategy and uses them in a fusion-based RRN modeling approach. In the coarse stage, an efficient difference index was first generated from the down-sampled reference and target images by combining the spectral correlation, spectral angle mapper (SAM), and Chebyshev distance. This index was then categorized into three groups of changed, unchanged, and uncertain classes using a fast multiple thresholding technique. In the fine stage, the subject image was first segmented into different clusters by the histogram-based fuzzy c-means (HFCM) algorithm. The optimal PIFs were then selected from the unchanged and uncertain regions using a bivariate joint distribution analysis of each cluster. In the RRN modeling step, two normalized subject images were first produced using the robust linear regression (RLR) and cluster-wise RLR (CRLR) methods based on the clustered PIFs. Finally, the normalized images were fused using the Choquet fuzzy integral fusion strategy to overcome the discontinuity between clusters in the final results and keep the radiometric rectification optimal. Several experiments were conducted on four different bi-temporal satellite images and a simulated dataset to demonstrate the efficiency of the proposed method. The results showed that the proposed method yielded superior RRN results and outperformed other well-known RRN algorithms in terms of both accuracy and execution time.
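
    The snippet below sketches only the robust per-band linear normalization step over pseudo-invariant features, with scikit-learn's HuberRegressor standing in for the paper's robust linear regression; the images and PIF mask are synthetic placeholders, and the clustering and Choquet fusion stages are not reproduced.

```python
# Robust per-band linear normalization of a subject image to a reference
# image, fitted only on (hypothetical) pseudo-invariant feature pixels.
import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(2)
n_bands, rows, cols = 4, 120, 120
reference = rng.random((n_bands, rows, cols))
subject = 0.8 * reference + 0.1 + 0.02 * rng.normal(size=reference.shape)

pif_mask = rng.random((rows, cols)) > 0.7       # placeholder PIF locations

normalized = np.empty_like(subject)
for b in range(n_bands):
    x = subject[b][pif_mask].reshape(-1, 1)     # subject-image PIF values
    y = reference[b][pif_mask]                  # reference-image PIF values
    model = HuberRegressor().fit(x, y)          # robust gain/offset per band
    normalized[b] = model.predict(subject[b].reshape(-1, 1)).reshape(rows, cols)
```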

    Wetland Mapping in Great Lakes Using Sentinel-1/2 Time-Series Imagery and DEM Data in Google Earth Engine

    The Great Lakes (GL) wetlands support a variety of rare and endangered animal and plant species. Thus, wetlands in this region should be mapped and monitored using advanced and reliable techniques. In this study, a wetland map of the GL was produced using Sentinel-1/2 datasets within the Google Earth Engine (GEE) cloud computing platform. To this end, an object-based supervised machine learning (ML) classification workflow was proposed. The proposed method contains two main classification steps. In the first step, several non-wetland classes (e.g., Barren, Cropland, and Open Water), which are more distinguishable using radar and optical Remote Sensing (RS) observations, were identified and masked using a trained Random Forest (RF) model. In the second step, wetland classes, including Fen, Bog, Swamp, and Marsh, along with two non-wetland classes of Forest and Grassland/Shrubland, were identified. Using the proposed method, the GL region was classified with an overall accuracy of 93.6% and a Kappa coefficient of 0.90. Additionally, the results showed that the proposed method was able to classify the wetland classes with an overall accuracy of 87% and a Kappa coefficient of 0.91. Non-wetland classes were also identified more accurately than wetlands (overall accuracy = 96.62% and Kappa coefficient = 0.95).
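
    A hedged sketch of the two-step logic follows: one Random Forest masks out the easily separable non-wetland classes, and a second model classifies the remaining pixels into wetland classes. The asset names, class codes, and parameters are assumptions for illustration, not the study's configuration.

```python
# Two-step classification sketch in the GEE Python API.
import ee

ee.Initialize()

stack = ee.Image("users/example/s1_s2_dem_stack")   # hypothetical Sentinel-1/2 + DEM stack
bands = stack.bandNames()

step1_train = ee.FeatureCollection("users/example/step1_samples")    # masked vs. kept classes
step2_train = ee.FeatureCollection("users/example/wetland_samples")

rf1 = ee.Classifier.smileRandomForest(150).train(
    features=stack.sampleRegions(collection=step1_train, properties=["class"], scale=10),
    classProperty="class", inputProperties=bands)
step1_map = stack.classify(rf1)

rf2 = ee.Classifier.smileRandomForest(150).train(
    features=stack.sampleRegions(collection=step2_train, properties=["class"], scale=10),
    classProperty="class", inputProperties=bands)

# Classify only pixels not assigned to the masked classes in step 1
# (here the masked classes are assumed to use codes 1-3).
wetland_map = stack.updateMask(step1_map.gte(4)).classify(rf2)
```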

    ELULC-10, a 10 m European land use and land cover map using Sentinel and landsat data in Google Earth Engine

    Land Use/Land Cover (LULC) maps can be produced effectively from cost-effective and frequent satellite observations. Powerful cloud computing platforms are increasingly enabling the use of freely accessible remotely sensed big geodata for LULC mapping over large regions. This study proposes a workflow to generate a 10 m LULC map of Europe with nine classes, ELULC-10, using European Sentinel-1/-2 and Landsat-8 images, as well as the LUCAS reference samples. More than 200,000 in situ surveys and 300,000 images were employed as inputs in the Google Earth Engine (GEE) cloud computing platform to perform classification with an object-based segmentation algorithm and an Artificial Neural Network (ANN). A novel ANN-based data preparation step was also presented to remove noisy reference samples from the LUCAS dataset. Additionally, the map was improved using several rule-based post-processing steps. The overall accuracy and kappa coefficient of the 2021 ELULC-10 map were 95.38% and 0.94, respectively. A detailed report of the classification accuracies was also provided, demonstrating accurate classification of different classes, such as Woodland and Cropland. Furthermore, the rule-based post-processing improved LULC class identification compared with current studies. The workflow can also supply seasonal, yearly, and change maps, given the proposed integration of complex machine learning algorithms with large satellite and survey datasets.
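
    The sketch below gives one plausible reading of the ANN-based sample cleaning step: train a small neural network on the labelled points and discard samples whose predicted probability for their own label falls below a threshold. The features, network shape, and 0.5 threshold are assumptions, not the paper's settings.

```python
# Flag reference samples that a small ANN considers unlikely to carry
# their assigned label; all data here are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 10))       # placeholder per-sample image features
y = rng.integers(0, 9, size=2000)     # placeholder LULC labels (9 classes)

ann = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
ann.fit(X, y)

proba = ann.predict_proba(X)
cols = np.searchsorted(ann.classes_, y)          # column of each sample's own label
own_label_conf = proba[np.arange(len(y)), cols]  # confidence in the given label
keep = own_label_conf >= 0.5                     # hypothetical noise threshold
X_clean, y_clean = X[keep], y[keep]
```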

    A Collection of Novel Algorithms for Wetland Classification with SAR and Optical Data

    Wetlands are valuable natural resources that provide many benefits to the environment, and thus mapping wetlands is crucially important. We have developed land cover and wetland classification algorithms that have general applicability to different geographical locations. We also aim for a high level of classification accuracy (i.e., more than 90%). Over the past two years, we have been developing an operational wetland classification approach aimed at a Newfoundland and Labrador province-wide wetland inventory. We have developed and published several algorithms to classify wetlands using multi-source data (i.e., polarimetric SAR and multi-spectral optical imagery), object-based image analysis, and advanced machine-learning tools. The algorithms have been tested and verified on many large pilot sites across the province and provided overall and class-based accuracies of about 90%. The developed methods have general applicability to other Canadian provinces (given field validation data), allowing the creation of a nation-wide wetland inventory system.

    Ocean remote sensing techniques and applications: a review (Part II)

    As discussed in the first part of this review paper, Remote Sensing (RS) systems are powerful tools for studying various oceanographic parameters. Part I of this review described different passive and active RS systems and six applications of RS in ocean studies, including Ocean Surface Wind (OSW), Ocean Surface Current (OSC), Ocean Wave Height (OWH), Sea Level (SL), Ocean Tide (OT), and Ship Detection (SD). In Part II, the remaining nine important applications of RS systems for ocean environments, including Iceberg, Sea Ice (SI), Sea Surface Temperature (SST), Ocean Surface Salinity (OSS), Ocean Color (OC), Ocean Chlorophyll (OCh), Ocean Oil Spill (OOS), Underwater Ocean, and Fishery, are comprehensively reviewed and discussed. For each application, the applicable RS systems, their advantages and disadvantages, various RS and Machine Learning (ML) techniques, and several case studies are discussed.

    Marine Habitat Mapping Using Bathymetric LiDAR Data: A Case Study from Bonne Bay, Newfoundland

    Marine habitats provide various benefits to the environment and humans. In this regard, an accurate marine habitat map is an important component of effective marine management. Newfoundland’s coastal area is covered by different marine habitats, which should be correctly mapped using advanced technologies, such as remote sensing methods. In this study, bathymetric Light Detection and Ranging (LiDAR) data were applied to accurately discriminate different habitat types in Bonne Bay, Newfoundland. To this end, the LiDAR intensity image was employed along with an object-based Random Forest (RF) algorithm. Two types of habitat classifications were produced: a two-class map (i.e., Vegetation and Non-Vegetation) and a five-class map (i.e., Eelgrass, Macroalgae, Rockweed, Fine Sediment, and Gravel/Cobble). The accuracies of the produced habitat maps were reasonable considering the existing challenges, such as errors in the LiDAR data and the lack of sufficient in situ samples for some classes, such as Macroalgae. The overall classification accuracies for the two-class and five-class maps were 87% and 80%, respectively, indicating the high capability of the developed machine learning model for future marine habitat mapping studies. The results also showed that Eelgrass, Fine Sediment, Gravel/Cobble, Macroalgae, and Rockweed cover 22.4% (3.66 km2), 51.4% (8.39 km2), 13.5% (2.21 km2), 6.9% (1.12 km2), and 5.8% (0.95 km2) of the study area, respectively.
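
    As a small follow-on example, the sketch below shows how per-class areas and percentages like those reported above can be derived from a classified raster: count the pixels per class and scale by the pixel size. The file name and the assumption that habitat classes are coded as positive integers are hypothetical.

```python
# Per-class area and percentage from a (hypothetical) classified GeoTIFF.
import numpy as np
import rasterio

with rasterio.open("bonne_bay_habitats.tif") as src:   # hypothetical classified map
    classes = src.read(1)
    pixel_area_m2 = abs(src.transform.a * src.transform.e)  # pixel width x height

labels, counts = np.unique(classes[classes > 0], return_counts=True)
areas_km2 = counts * pixel_area_m2 / 1e6
percentages = 100 * counts / counts.sum()

for lab, area, pct in zip(labels, areas_km2, percentages):
    print(f"class {lab}: {area:.2f} km^2 ({pct:.1f}%)")
```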